$l_{2,p}$ Matrix Norm and Its Application in Feature Selection

Authors

  • Liping Wang
  • Songcan Chen
Abstract

Recently, the l2,1 matrix norm has been widely applied in areas such as computer vision, pattern recognition, and biological study. As an extension of the l1 vector norm, the mixed l2,1 matrix norm is often used to find jointly sparse solutions, and efficient iterative algorithms have been designed for l2,1-norm minimization problems. Computational studies have shown that lp-regularization (0 < p < 1) yields sparser solutions than l1-regularization, but its extension to matrix norms has seldom been considered. This paper defines a mixed l2,p (p ∈ (0, 1]) matrix pseudo-norm that generalizes both the lp vector norm to matrices and the l2,1-norm to the nonconvex case (0 < p < 1). An efficient unified algorithm is proposed to solve the induced l2,p-norm (p ∈ (0, 1]) optimization problems, and its convergence is demonstrated uniformly for all p ∈ (0, 1]. Typical values of p ∈ (0, 1] are applied to feature selection in computational biology, and the experimental results show that some choices of 0 < p < 1 improve on the sparsity pattern obtained with p = 1.
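For concreteness, the mixed l2,p pseudo-norm of a matrix W ∈ R^{n×d} with rows w^1, …, w^n is conventionally defined row-wise; the sketch below follows that standard convention, which the paper's definition presumably matches:

```latex
% Mixed l_{2,p} matrix pseudo-norm, 0 < p <= 1: take the l_2 norm of
% each row, then the l_p (pseudo-)norm of the vector of row norms.
% For p = 1 this is the usual l_{2,1} norm; for 0 < p < 1 the triangle
% inequality fails, hence "pseudo-norm".
\|W\|_{2,p} \;=\; \Bigl( \sum_{i=1}^{n} \|w^{i}\|_{2}^{\,p} \Bigr)^{1/p}
           \;=\; \Bigl( \sum_{i=1}^{n} \Bigl( \sum_{j=1}^{d} w_{ij}^{2} \Bigr)^{p/2} \Bigr)^{1/p}
```

Penalizing ‖W‖_{2,p}^p drives entire rows of W to zero at once, which is what makes it a feature selector when rows index features. As an illustration of the kind of unified iterative scheme the abstract refers to, the sketch below applies the standard iteratively reweighted least-squares update to an l2,p-regularized least-squares loss; the function name, the choice of loss, and the reweighting are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def l2p_feature_select(X, Y, lam=1.0, p=0.5, n_iter=50, eps=1e-8):
    """Sketch: min_W ||X W - Y||_F^2 + lam * ||W||_{2,p}^p, 0 < p <= 1,
    via iteratively reweighted least squares (IRLS).

    Each step solves a ridge-like system whose diagonal weights grow as
    a row's norm shrinks, so weak rows are pushed jointly toward zero.
    """
    W = np.linalg.lstsq(X, Y, rcond=None)[0]            # warm start
    for _ in range(n_iter):
        row_norms = np.linalg.norm(W, axis=1) + eps     # smoothed ||w^i||_2
        # d_i = p / (2 ||w^i||_2^{2-p}); p = 1 recovers the familiar
        # 1 / (2 ||w^i||_2) reweighting used for l_{2,1} minimization.
        D = np.diag(p / (2.0 * row_norms ** (2.0 - p)))
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
    return W

# Toy usage: rank features by the row norms of W and keep the top ones.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Y = X[:, :3] @ rng.standard_normal((3, 2))              # 3 informative features
W = l2p_feature_select(X, Y, lam=5.0, p=0.5)
print(np.argsort(-np.linalg.norm(W, axis=1))[:3])       # likely {0, 1, 2}
```

Smaller p sharpens the reweighting (the exponent 2 − p grows), which is the mechanism behind the abstract's claim that some 0 < p < 1 yield sparser rows than p = 1.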


Similar Papers

Direct l_(2, p)-Norm Learning for Feature Selection

In this paper, we propose a novel sparse-learning-based feature selection method that directly optimizes a large-margin linear classification model's sparsity with the l2,p-norm (0 < p < 1) subject to data-fitting constraints, rather than using sparsity as a regularization term. To solve the direct sparsity optimization problem, which is non-smooth and non-convex when p < 1, we provide an efficient iterative ...
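The contrast this snippet draws can be written out explicitly; in the sketch below, L(W) is a generic data-fitting loss and ε, λ are placeholder parameters, not the paper's exact notation:

```latex
% Direct (constrained) sparsity optimization vs. the usual regularized form:
\min_{W}\; \|W\|_{2,p}^{p} \quad \text{s.t.}\quad L(W) \le \varepsilon
\qquad\text{versus}\qquad
\min_{W}\; L(W) + \lambda\, \|W\|_{2,p}^{p}
```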


Developing a Filter-Wrapper Feature Selection Method and its Application in Dimension Reduction of Gene Expression

Nowadays, the growing volume of data and number of attributes in datasets reduce the accuracy of learning algorithms and increase their computational complexity. Feature selection is a dimensionality-reduction method, carried out through filtering or wrapping. Wrapper methods are more accurate than filter methods, but filter methods run faster and impose a lower computational burden. With ...


Effective Discriminative Feature Selection with Non-trivial Solutions

Feature selection and feature transformation, the two main ways to reduce dimensionality, are often presented separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method, Linear Discriminant Analysis (LDA), with sparsity regularization. We impose row sparsity on the transformation matrix of LDA through l2,1-norm regularization ...
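One common way to write such a combination, assuming the standard scatter-matrix form of LDA (S_w the within-class scatter, S_t the total scatter) plus a row-sparsity term; this is a plausible sketch, not necessarily this paper's exact objective:

```latex
\min_{W}\; \operatorname{Tr}\!\left(W^{\top} S_w W\right) + \gamma\, \|W\|_{2,1}
\quad \text{s.t.}\quad W^{\top} S_t W = I
```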


Uncorrelated Group LASSO

The ℓ2,1-norm is an effective regularization for enforcing simple group sparsity in feature learning. To capture subtler structures among feature groups, we propose a new regularization called the exclusive group ℓ2,1-norm. It enforces sparsity at the intra-group level using the ℓ2,1-norm, while encouraging the selected features to distribute across different groups using the ℓ2 norm at the inter-group ...
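Reading the description literally (ℓ2,1 within each group, ℓ2 across groups), a penalty of the following shape fits by analogy with the classical exclusive lasso Σ_g ‖w_g‖_1²; this is an assumed form based only on the snippet, with W_g denoting the rows of W belonging to group g:

```latex
\Omega(W) \;=\; \sum_{g} \|W_{g}\|_{2,1}^{2}
```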


Online Streaming Feature Selection Using Geometric Series of the Adjacency Matrix of Features

Feature Selection (FS) is an important pre-processing step in machine learning and data mining. Traditional feature selection methods assume that the entire feature space is available from the beginning. However, online streaming features (OSF) are an integral part of many real-world applications. In OSF, the number of training examples is fixed while the number of features grows with time ...



Journal:
  • CoRR

Volume: abs/1303.3987

Publication year: 2013